Supplement to “Bayesian optimization explains human active search”, NIPS 2013 [1]
Abstract
Figure 1: Illustration of the stimuli used during test trials. Functions are reconstructed from the locations clicked by human participants. F18 is a Dirac function with a single non-zero element at −270.
Similar papers
Bayesian optimization explains human active search
Many real-world problems have complicated objective functions. To optimize such functions, humans utilize sophisticated sequential decision-making strategies. Many optimization algorithms have also been developed for this same purpose, but how do they compare to humans in terms of both performance and behavior? We try to unravel the general underlying algorithm people may be using while searchi...
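The algorithm family the paper compares against human search is sequential, model-based optimization. As a concrete illustration, below is a minimal sketch of a Bayesian optimization loop with a Gaussian-process surrogate and an expected-improvement acquisition over a 1-D function; the kernel, length scale, toy objective, and all names are illustrative assumptions, not the paper's exact settings.

```python
# A minimal sketch of GP-based Bayesian optimization with an expected-
# improvement (EI) acquisition. Kernel choice, length scale, and the toy
# objective are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean/std of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for maximization: expected gain over the incumbent best value."""
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):                 # hidden 1-D function the searcher probes
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200)                  # candidate probe locations
x_obs = rng.uniform(0, 1, size=2)              # two random initial probes
y_obs = objective(x_obs)
for _ in range(10):                            # sequential search budget
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
print(f"best value found: {y_obs.max():.3f} at x = {x_obs[np.argmax(y_obs)]:.3f}")
```

Swapping `expected_improvement` for another acquisition rule (probability of improvement, upper confidence bound) changes where the loop probes next, which is the kind of behavioral variation such human-versus-algorithm comparisons examine.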
A* Lasso for Learning a Sparse Bayesian Network Structure for Continuous Variables
We address the problem of learning a sparse Bayesian network structure for continuous variables in a high-dimensional space. The constraint that the estimated Bayesian network structure must be a directed acyclic graph (DAG) makes the problem challenging because of the huge search space of network structures. Most previous methods were based on a two-stage approach that prunes the search space ...
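As a rough illustration of the search component described here, the sketch below runs A* over the lattice of variable subsets: extending a subset with variable v pays the best score of regressing v on parents drawn from that subset. A least-squares BIC score stands in for the paper's lasso score, and the toy data, exhaustive parent enumeration, and all names are assumptions made for brevity.

```python
# A* over subsets of "already ordered" variables; a BIC-style least-squares
# score is a stand-in for the paper's lasso score. Toy data and names are
# illustrative assumptions.
import heapq
from itertools import combinations, count
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 4
X = rng.normal(size=(n, d))
X[:, 2] += 0.8 * X[:, 0]          # plant an edge 0 -> 2 in the toy data
X[:, 3] += 0.6 * X[:, 1]          # and an edge 1 -> 3

def score(v, parents):
    """BIC-style cost of explaining column v from a given parent set."""
    if parents:
        A = X[:, list(parents)]
        beta, *_ = np.linalg.lstsq(A, X[:, v], rcond=None)
        resid = X[:, v] - A @ beta
    else:
        resid = X[:, v]
    return n * np.log(resid.var() + 1e-12) + len(parents) * np.log(n)

def best_score(v, allowed):
    """Best parent set for v drawn from 'allowed' (exhaustive; d is tiny)."""
    return min(score(v, p) for k in range(len(allowed) + 1)
               for p in combinations(allowed, k))

# admissible heuristic: score each remaining variable ignoring acyclicity
h_free = {v: best_score(v, tuple(w for w in range(d) if w != v))
          for v in range(d)}

tie = count()                      # tiebreaker so heap entries stay comparable
start = frozenset()
frontier = [(sum(h_free.values()), 0.0, next(tie), start, ())]
best_g = {start: 0.0}
while frontier:
    f, g, _, state, order = heapq.heappop(frontier)
    if len(state) == d:
        print("variable ordering found by A*:", order)
        break
    for v in set(range(d)) - state:
        g2 = g + best_score(v, tuple(state))
        nxt = frozenset(state | {v})
        if g2 < best_g.get(nxt, np.inf):
            best_g[nxt] = g2
            h = sum(h_free[w] for w in range(d) if w not in nxt)
            heapq.heappush(frontier, (g2 + h, g2, next(tie), nxt, order + (v,)))
```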
Near-Optimal Bayesian Active Learning with Noisy Observations
We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that if the observations are ...
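For reference, here is a minimal sketch of the noise-free GBS rule the abstract mentions: query the test whose predicted outcomes split the remaining prior mass most evenly, then discard inconsistent hypotheses. The threshold-classifier setup and all names are illustrative assumptions.

```python
# Generalized binary search (GBS), noise-free case: pick the test whose
# predicted labels split the surviving probability mass closest to 1/2.
# Hypotheses are threshold classifiers on a line (an assumed toy setup).
import numpy as np

rng = np.random.default_rng(2)
thresholds = np.linspace(0, 1, 50)     # one hypothesis per threshold
posterior = np.full(len(thresholds), 1.0 / len(thresholds))
tests = np.linspace(0, 1, 25)          # points we may pay to label
truth = rng.choice(thresholds)         # unknown hypothesis to identify

def predict(th, x):
    return 1 if x >= th else -1

for step in range(8):
    # GBS rule: mass predicting +1 should be as close to 1/2 as possible.
    mass_pos = np.array([sum(p for th, p in zip(thresholds, posterior)
                             if predict(th, x) == 1) for x in tests])
    x = tests[np.argmin(np.abs(mass_pos - 0.5))]
    y = predict(truth, x)                            # noise-free label
    keep = np.array([predict(th, x) == y for th in thresholds])
    posterior = posterior * keep                     # drop inconsistent
    posterior /= posterior.sum()
print("true threshold:", truth,
      "| MAP estimate:", thresholds[np.argmax(posterior)])
```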
Multi-Task Bayesian Optimization
Bayesian optimization has recently been proposed as a framework for automatically tuning the hyperparameters of machine learning models and has been shown to yield state-of-the-art performance with impressive ease and efficiency. In this paper, we explore whether it is possible to transfer the knowledge gained from previous optimizations to new tasks in order to find optimal hyperparameter sett...
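One common way to realize such transfer is a multi-task (intrinsic-coregionalization) GP kernel, K((x, t), (x', t')) = B[t, t'] · k(x, x'), where B encodes task similarity. The sketch below is a generic illustration under that assumption, not the paper's exact construction; the kernel, B, and the toy data are all assumed.

```python
# A multi-task GP sketch: a joint kernel B[t,t'] * k(x,x') lets a handful
# of observations on a new task borrow strength from a related old task.
# B, the kernel, and the toy data are illustrative assumptions.
import numpy as np

def k_input(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

B = np.array([[1.0, 0.8],        # task 0 and task 1 strongly correlated
              [0.8, 1.0]])

# observations (x, task, y); task 1 contributes only two points of its own
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.8])
t = np.array([0,   0,   0,   0,   0,   1,   1])
y = np.sin(2 * np.pi * x) + np.where(t == 1, 0.3, 0.0)

def joint_kernel(x1, t1, x2, t2):
    return B[np.ix_(t1, t2)] * k_input(x1, x2)

x_star = np.linspace(0, 1, 5)
t_star = np.ones(len(x_star), dtype=int)       # predict on the new task
K = joint_kernel(x, t, x, t) + 1e-6 * np.eye(len(x))
Ks = joint_kernel(x, t, x_star, t_star)
mu = Ks.T @ np.linalg.solve(K, y)              # posterior mean on task 1
print(np.round(mu, 2))
```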
Lookahead Bayesian Optimization with Inequality Constraints
We consider the task of optimizing an objective function subject to inequality constraints when both the objective and the constraints are expensive to evaluate. Bayesian optimization (BO) is a popular way to tackle optimization problems with expensive objective function evaluations, but has mostly been applied to unconstrained problems. Several BO approaches have been proposed to address expen...
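A common myopic baseline in this line of work weights expected improvement by the modeled probability that the constraint is satisfied; the sketch below shows that rule, not the paper's lookahead acquisition, and assumes GP posterior means and standard deviations for objective and constraint are given.

```python
# Constrained EI baseline: EI(x) * Pr[c(x) <= 0], with the objective f and
# the constraint c modeled by independent GPs. The posterior values below
# are hypothetical stand-ins for real GP predictions.
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sd_f, mu_c, sd_c, best_feasible):
    """EI(x) * Pr[c(x) <= 0] for minimizing f subject to c <= 0."""
    z = (best_feasible - mu_f) / sd_f
    ei = (best_feasible - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)
    p_feasible = norm.cdf(-mu_c / sd_c)        # Pr[c(x) <= 0] under the GP
    return ei * p_feasible

# hypothetical GP posteriors at three candidate points
mu_f = np.array([0.2, -0.1, 0.4]); sd_f = np.array([0.3, 0.5, 0.2])
mu_c = np.array([-0.5, 0.6, -0.2]); sd_c = np.array([0.2, 0.3, 0.4])
scores = constrained_ei(mu_f, sd_f, mu_c, sd_c, best_feasible=0.1)
print("pick candidate", int(np.argmax(scores)), "scores:", np.round(scores, 3))
```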